A Stackelberg congestion game (SCG) is a bilevel program in which a leader aims to maximize their own gain by anticipating and manipulating the equilibrium state at which the followers settle by playing a congestion game. Large-scale SCGs are well known for their intractability and complexity. This study approaches SCGs through differentiable programming, which marries the latest developments in machine learning with conventional methods. The core idea is to represent the lower-level equilibrium problem by an evolution path formed by imitative logit dynamics. This representation enables the use of automatic differentiation over the evolution path towards equilibrium, leading to a double-loop gradient descent algorithm. We further show that the fixation on the lower-level equilibrium may be a self-imposed computational obstacle. Instead, the leader may look only a few steps ahead along the followers' evolution path, while updating its decisions through a co-evolution process. This insight gives rise to a single-loop algorithm that is more efficient in terms of both memory consumption and computation time. Through numerical experiments covering a wide range of benchmark problems, we find that the single-loop algorithm consistently strikes a good balance between solution quality and efficiency, outperforming not only the standard double-loop implementation but also other methods from the literature. Importantly, our results highlight both the wastefulness of "full anticipation" and the peril of "zero anticipation". If a quick heuristic is needed to solve a very large SCG, the proposed single-loop algorithm with a one-step look-ahead makes an ideal candidate.
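A minimal sketch of the single-loop idea under stated assumptions: the congestion cost `follower_cost`, the three-route setup, and the toll-revenue objective are hypothetical stand-ins, not the paper's formulation. Followers evolve by a few unrolled steps of imitative logit dynamics, and the leader differentiates through that short path with automatic differentiation:

```python
import torch

def follower_cost(p, x):
    # Hypothetical congestion cost on each route: latency rises with the
    # flow p and with the leader's toll x (a stand-in, not the paper's model).
    return 2.0 * p + x

def logit_step(p, x, eta=0.5):
    # One step of imitative logit dynamics: flow migrates toward routes
    # whose current cost is lower.
    return torch.softmax(torch.log(p) - eta * follower_cost(p, x), dim=0)

K_LOOKAHEAD = 1                               # steps the leader looks ahead
x = torch.zeros(3, requires_grad=True)        # leader's tolls on 3 routes
opt = torch.optim.Adam([x], lr=0.05)
p = torch.full((3,), 1.0 / 3.0)               # followers' initial mixed flow

for _ in range(200):                          # single loop: co-evolution
    for _ in range(K_LOOKAHEAD):              # short unrolled evolution path
        p = logit_step(p, x)
    loss = -(x * p).sum()                     # e.g., negative toll revenue
    opt.zero_grad()
    loss.backward()                           # autodiff through the path
    opt.step()
    p = p.detach()                            # followers resume from here
```

Unrolling the inner loop all the way to equilibrium would recover the double-loop variant; the one-step look-ahead shown here is the quick-heuristic regime the abstract recommends for very large instances.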
RNA structure determination and prediction can promote RNA-targeted drug development and engineerable synthetic element design. However, due to the intrinsic structural flexibility of RNA, all three mainstream structure determination methods (X-ray crystallography, NMR, and cryo-EM) encounter challenges when resolving RNA structures, which leads to the scarcity of resolved RNA structures. Computational prediction approaches have emerged as a complement to the experimental techniques. However, none of the de novo approaches is based on deep learning, since too few structures are available. Instead, most of them adopt time-consuming sampling strategies, and their performance seems to have hit a plateau. In this work, we develop the first end-to-end deep learning approach, E2Efold-3D, to accurately perform de novo RNA structure prediction. Several novel components are proposed to overcome the data scarcity, such as a fully-differentiable end-to-end pipeline, secondary structure-assisted self-distillation, and a parameter-efficient backbone formulation. These designs are validated on an independent, non-overlapping RNA puzzle test dataset and reach an average sub-4 Å root-mean-square deviation, demonstrating superior performance compared to state-of-the-art approaches. Interestingly, it also achieves promising results when predicting RNA complex structures, a feat that no previous system could accomplish. When E2Efold-3D is coupled with experimental techniques, the RNA structure prediction field can be greatly advanced.
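For reference, the reported accuracy metric is the root-mean-square deviation (RMSD) after optimal superposition. A minimal NumPy sketch of the standard Kabsch-based computation follows; the inputs are hypothetical coordinate arrays, and this is the generic metric, not code from E2Efold-3D:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between predicted P and reference Q, both (N, 3) coordinate
    arrays, after optimal superposition via the Kabsch algorithm."""
    P = P - P.mean(axis=0)                   # center both structures
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)        # covariance SVD gives the rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return float(np.sqrt(((P @ R.T - Q) ** 2).sum(axis=1).mean()))
```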
This paper presents a practical global optimization algorithm for the K-center clustering problem, which aims to select K samples as the cluster centers to minimize the maximum within-cluster distance. This algorithm is based on a reduced-space branch and bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the regions of centers. To improve efficiency, we have designed a two-stage decomposable lower bound, the solution of which can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bounds tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets have demonstrated that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with the state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective function value by 25.8% on average across all the synthetic and real-world datasets.
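For context, a minimal NumPy sketch of the K-center objective being minimized, together with the classic greedy 2-approximation (Gonzalez's farthest-point heuristic), which is representative of the heuristic baselines exact methods are compared against; it is not the paper's branch-and-bound algorithm:

```python
import numpy as np

def kcenter_objective(X, centers):
    # Maximum within-cluster distance: each sample is assigned to its
    # nearest chosen center, and we take the worst-case distance.
    d = np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=-1)
    return d.min(axis=1).max()

def greedy_kcenter(X, K, seed=0):
    # Gonzalez's heuristic: repeatedly add the sample farthest from the
    # current set of centers. Guarantees a 2-approximation.
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(K - 1):
        centers.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(X - X[centers[-1]], axis=1))
    return centers
```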
Existing computer vision systems can compete with humans in understanding the visible parts of objects, but still fall far short of humans when it comes to depicting the invisible parts of partially occluded objects. Image amodal completion aims to equip computers with human-like amodal completion capabilities, so that they understand the complete object despite it being partially occluded. The main purpose of this survey is to provide an intuitive understanding of the research hotspots, key technologies, and future trends in the field of image amodal completion. Firstly, we provide a comprehensive review of the latest literature in this emerging field, exploring three key tasks in image amodal completion: amodal shape completion, amodal appearance completion, and order perception. Then, we examine popular datasets related to image amodal completion, along with their common data collection methods and evaluation metrics. Finally, we discuss real-world applications and future research directions for image amodal completion, thereby helping readers understand the challenges of existing technologies and upcoming research trends.
With the development of graph kernels and graph representation learning, many superior methods have been proposed to handle the scalability and oversmoothing issues of graph structure learning. However, most of these strategies are designed based on practical experience rather than theoretical analysis. In this paper, we use a particular dummy node that connects to all existing vertices without affecting the original vertex and edge properties. We further prove that such a dummy node can help build an efficient monomorphic edge-to-vertex transform and an epimorphic inverse to recover the original graph. It also indicates that adding dummy nodes can preserve local and global structures for better graph representation. We extend graph kernels and graph neural networks with dummy nodes and conduct experiments on graph classification and subgraph isomorphism matching tasks. Empirical results demonstrate that taking graphs with dummy nodes as input significantly boosts graph structure learning, and that using their edge-to-vertex graphs can achieve similar results. We also discuss the gain in expressive power from dummy nodes in neural networks.
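A minimal NetworkX sketch of the two constructions described above; the graph instance is arbitrary, and the code only illustrates the dummy-node augmentation and the edge-to-vertex (line-graph) transform, not the paper's kernels or GNNs:

```python
import networkx as nx

def add_dummy_node(G, dummy="dummy"):
    # Augment G with one dummy node connected to every existing vertex,
    # leaving the original vertices and edges untouched.
    H = G.copy()
    H.add_node(dummy)
    H.add_edges_from((dummy, v) for v in G.nodes)
    return H

G = nx.cycle_graph(4)
H = add_dummy_node(G)
L = nx.line_graph(H)   # edge-to-vertex transform of the augmented graph

# Intuition for invertibility: each original edge (u, v) corresponds to the
# line-graph vertex adjacent to both (dummy, u) and (dummy, v), so the
# original graph can be recovered from L.
```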
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
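A minimal PyTorch sketch of the token-level interface the abstract describes, with hypothetical shapes and a stock transformer decoder standing in for CMT's actual architecture: image tokens and point-cloud tokens are simply concatenated into one memory sequence that object queries attend to, with no explicit view transformation:

```python
import torch
import torch.nn as nn

img_tokens = torch.randn(2, 1400, 256)   # (batch, num_image_tokens, dim)
pts_tokens = torch.randn(2, 900, 256)    # (batch, num_point_tokens, dim)
queries = torch.randn(2, 300, 256)       # learnable 3D object queries

layer = nn.TransformerDecoderLayer(d_model=256, nhead=8, batch_first=True)
decoder = nn.TransformerDecoder(layer, num_layers=6)

# Multi-modal fusion by concatenation: queries attend to both modalities.
memory = torch.cat([img_tokens, pts_tokens], dim=1)
out = decoder(queries, memory)
boxes = nn.Linear(256, 10)(out)          # per-query 3D box regression head
```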
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
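A minimal PyTorch sketch of the NAIVEATTACK idea as described: stamp a trigger onto a fraction of the raw images (and relabel them) before distillation starts. The patch shape, poison ratio, and target class here are illustrative assumptions:

```python
import torch

def stamp_trigger(images, labels, target_class=0, ratio=0.1, patch=4):
    """images: (N, C, H, W) in [0, 1]; poisons a fraction with a white patch."""
    poisoned = images.clone()
    tainted = labels.clone()
    n = int(ratio * len(images))
    idx = torch.randperm(len(images))[:n]
    poisoned[idx, :, -patch:, -patch:] = 1.0   # white square, bottom-right
    tainted[idx] = target_class               # relabel to the attacker's class
    return poisoned, tainted

# The poisoned set then goes through dataset distillation as usual. DOORPING,
# by contrast, would re-optimize the trigger at every distillation iteration.
```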
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by finetuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while a model that is not pre-trained (Transformer) shows no such ability beyond naive repetition. Evaluating generated music is a challenging task; evaluating drum grooves, which have little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
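A minimal sketch of one way drum MIDI could be serialized into text tokens for language-model finetuning; the token scheme (WAIT_/HIT_) is purely illustrative and not necessarily the encoding used in this work:

```python
def encode_drum_events(events):
    """events: list of (time_step, midi_pitch) pairs for drum hits."""
    tokens, prev_t = [], 0
    for t, pitch in sorted(events):
        if t > prev_t:
            tokens.append(f"WAIT_{t - prev_t}")  # relative time shift
            prev_t = t
        tokens.append(f"HIT_{pitch}")            # e.g., 36 = kick, 38 = snare
    return " ".join(tokens)

print(encode_drum_events([(0, 36), (0, 42), (4, 38), (8, 36)]))
# -> "HIT_36 HIT_42 WAIT_4 HIT_38 WAIT_4 HIT_36"
```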
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: firstly, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; secondly, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
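A minimal PyTorch sketch of the first insight, under assumed shapes and a simple sigmoid gate: pool a dynamic class center from the support features using the support mask, then re-weight the query features channel-wise. The actual RefT module is more elaborate:

```python
import torch

def masked_class_center(support_feat, support_mask):
    """support_feat: (C, H, W); support_mask: (H, W) binary foreground mask."""
    w = support_mask / (support_mask.sum() + 1e-6)   # normalized pooling weights
    return (support_feat * w).sum(dim=(1, 2))        # (C,) dynamic class center

def reweight_query(query_feat, center):
    """query_feat: (C, H, W); scale channels by the class-center gate."""
    gate = torch.sigmoid(center)[:, None, None]      # (C, 1, 1) channel gate
    return query_feat * gate
```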
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. It is therefore difficult to deploy them on edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution to compress GNNs, where a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness consideration. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To address this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. Then we propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and thus can be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
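A minimal PyTorch sketch of the problem setup, not of RELIANT itself: a standard distillation objective augmented with an illustrative statistical-parity penalty on a binary sensitive attribute `group` (the penalty form and all names are assumptions):

```python
import torch
import torch.nn.functional as F

def fair_kd_loss(student_logits, teacher_logits, labels, group, lam=1.0, T=2.0):
    # Standard Hinton-style distillation: match softened teacher outputs.
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    # Illustrative fairness term: gap in the student's predicted-positive
    # rate between the two groups (assumes both groups occur in the batch).
    p = F.softmax(student_logits, dim=1)[:, 1]
    gap = (p[group == 0].mean() - p[group == 1].mean()).abs()
    return ce + kd + lam * gap
```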